Convergence to the Time Average by Stochastic Regularization

Authors
Abstract


Related articles

Convergence of sample average approximation for stochastic optimization problems with mixed expectation and per-scenario constraints

We present a framework for ensuring convergence of sample average approximations to stochastic optimization problems that include expectation constraints in addition to per-scenario constraints.
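As a rough, hedged sketch of that setup (the quadratic objective, the affine constraints, and all dimensions below are invented for illustration and are not taken from the cited paper), a sample average approximation replaces the expectation constraint by its empirical mean over the drawn scenarios, while each per-scenario constraint is imposed for every sampled scenario individually:

```python
# Minimal sketch of sample average approximation (SAA) with a mixed
# expectation constraint and per-scenario constraints.  The objective,
# constraint functions, and dimensions are toy assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_scenarios, dim = 50, 3
xi = rng.normal(size=(n_scenarios, dim))          # sampled scenarios xi_1..xi_N

def objective(x):
    # E[ ||x - xi||^2 ] approximated by its sample average.
    return np.mean(np.sum((xi - x) ** 2, axis=1))

# Expectation constraint  E[ xi . x - 1 ] <= 0, approximated by the sample mean.
expectation_con = {"type": "ineq",
                   "fun": lambda x: -(np.mean(xi @ x) - 1.0)}

# Per-scenario constraints  xi_i . x <= 5  imposed for every sampled scenario.
per_scenario_cons = [{"type": "ineq",
                      "fun": lambda x, i=i: 5.0 - xi[i] @ x}
                     for i in range(n_scenarios)]

result = minimize(objective, x0=np.zeros(dim), method="SLSQP",
                  constraints=[expectation_con] + per_scenario_cons)
print(result.x, result.fun)
```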


Multi-Agent Average Convergence

Let I = [0, N − 1] denote the set of indices. We define the set of states S to contain all maps of type I → R for a fixed positive N ∈ ℕ (representing the number of agents). For each s ∈ S and i ∈ I, s(i) represents the value held by the agent with ID i. We write s0 for an arbitrarily fixed start state. We introduce the function g whose domain MR is the set of non-empty finite multisets of real number...
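Reading this construction literally, and guessing that the truncated definition of g returns a summary of the multiset such as its mean (an assumption, since the snippet is cut off), a direct encoding could look like the following sketch; all concrete values are illustrative.

```python
# Sketch of the state space described above: a state is a map from agent
# indices I = {0, ..., N-1} to real values, and g acts on non-empty finite
# multisets of reals.  Taking g to be the mean is an assumption; the
# snippet is truncated before g is fully defined.
from collections import Counter

N = 4                                   # number of agents (arbitrary)
I = range(N)                            # index set I = [0, N-1]

# A state s : I -> R, here the arbitrarily fixed start state s0.
s0 = {i: float(i) for i in I}

def g(multiset: Counter) -> float:
    """Map a non-empty finite multiset of reals to a real (assumed: its mean)."""
    total = sum(value * count for value, count in multiset.items())
    size = sum(multiset.values())
    return total / size

values = Counter(s0.values())           # the multiset of values held by the agents
print(g(values))                        # 1.5 for the start state above
```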


On the convergence of weighted-average consensus

In this note we give sufficient conditions for the convergence of the iterative algorithm called weighted-average consensus in directed graphs. We study the discrete-time form of this algorithm. We use standard techniques from matrix theory to prove the main result. As a particular case one can obtain well-known results for non-weighted average consensus. We also give a corollary for undirected...
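For orientation, the discrete-time form of weighted-average consensus is typically the iteration x(k+1) = W x(k) with a row-stochastic weight matrix W compatible with the directed graph, and the states converge to a weighted average of the initial values determined by the left Perron eigenvector of W. A minimal numerical sketch, with a made-up graph, weights, and initial values:

```python
# Minimal sketch of discrete-time weighted-average consensus on a directed
# graph: x(k+1) = W x(k) with a row-stochastic weight matrix W.  The graph,
# weights, and initial values are illustrative assumptions.
import numpy as np

# Row-stochastic weights for a strongly connected 4-node directed graph.
W = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.4, 0.6, 0.0],
              [0.0, 0.0, 0.3, 0.7],
              [0.8, 0.0, 0.0, 0.2]])

x0 = np.array([1.0, 4.0, 2.0, 7.0])      # initial agent values x(0)
x = x0.copy()
for _ in range(200):                     # iterate x(k+1) = W x(k)
    x = W @ x

# The limit is the weighted average pi . x(0), where pi is the left Perron
# eigenvector of W normalised to sum to one.
eigvals, eigvecs = np.linalg.eig(W.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(x)                                 # every entry is approximately pi @ x0
print(pi @ x0)
```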


A convergence analysis of regularization by discretization in preimage space

In this paper we investigate the regularizing properties of discretization in preimage space for linear and nonlinear ill-posed operator equations with noisy data. We propose to choose the discretization level, that acts as a regularization parameter in this context, by a discrepancy principle. While general convergence has been shown not to hold (see [17]), we provide convergence results under...
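As a schematic of how a discretization level can act as a regularization parameter chosen by a discrepancy principle (only a stand-in: the toy operator below is a Hilbert matrix handled via its SVD, not the paper's preimage-space discretization), one increases the level n until the residual falls below τδ for noise level δ:

```python
# Schematic sketch: discretization level n chosen by a discrepancy principle
# for a linear ill-posed problem A x = y with noisy data y_delta.  The toy
# operator, noise level, and tau are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
m = 60
A = np.array([[1.0 / (i + j + 1) for j in range(m)] for i in range(m)])  # ill-posed Hilbert matrix
x_true = np.ones(m)

delta = 1e-3                                # noise level ||y_delta - y|| = delta
noise = rng.normal(size=m)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)

U, s, Vt = np.linalg.svd(A)
tau = 1.2                                   # safety factor in the discrepancy principle

for n in range(1, m + 1):
    # Projected (discretized) solution using the first n singular components.
    coeffs = (U[:, :n].T @ y_delta) / s[:n]
    x_n = Vt[:n].T @ coeffs
    residual = np.linalg.norm(A @ x_n - y_delta)
    if residual <= tau * delta:             # stop at the smallest admissible level n
        break

print(n, np.linalg.norm(x_n - x_true))
```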


Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral-Regularization Algorithms

We study generalization properties of distributed algorithms in the setting of nonparametric regression over a reproducing kernel Hilbert space (RKHS). We first investigate distributed stochastic gradient methods (SGM), with mini-batches and multi-passes over the data. We show that optimal generalization error bounds can be retained for distributed SGM provided that the partition level is not t...
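One common instance of such distributed schemes, offered here only as a hedged illustration rather than the paper's algorithm, is divide-and-conquer: partition the data, run a spectral-regularization algorithm such as kernel ridge regression on each block, and average the local predictors. The kernel, bandwidth, regularization parameter, and synthetic data in this sketch are assumptions:

```python
# Sketch of distributed learning by divide-and-conquer: fit kernel ridge
# regression (a simple spectral-regularization algorithm) on each data
# partition and average the local predictors.
import numpy as np

rng = np.random.default_rng(2)
n, n_partitions, lam, bandwidth = 600, 4, 1e-2, 0.3

X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=n)

def gaussian_kernel(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def fit_local(Xj, yj):
    # Local KRR estimator: alpha = (K + n_j * lam * I)^{-1} y_j.
    K = gaussian_kernel(Xj, Xj)
    alpha = np.linalg.solve(K + len(yj) * lam * np.eye(len(yj)), yj)
    return lambda Xnew, Xj=Xj, alpha=alpha: gaussian_kernel(Xnew, Xj) @ alpha

# Partition the sample, fit one estimator per partition, average the predictions.
parts = np.array_split(rng.permutation(n), n_partitions)
local_predictors = [fit_local(X[idx], y[idx]) for idx in parts]

X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
f_bar = np.mean([f(X_test) for f in local_predictors], axis=0)
print(f_bar)                 # averaged prediction, roughly sin(3 x) on the grid
```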



Journal

Journal title: Journal of Nonlinear Mathematical Physics

Year: 2021

ISSN: 1776-0852

DOI: 10.1080/14029251.2013.792465